stable-2.13.6 #11227
Merged
Conversation
…rvices (#10893)

The "all mirror services have endpoints" check can fail in the presence of lots of mirrored services because for each service we query the kube API for its endpoints, and those calls reuse the same golang context, which ends up reaching its deadline. To fix, we create a new context object per call.

## Repro

First patch `check.go` to introduce a sleep in order to simulate network latency:

```diff
diff --git a/multicluster/cmd/check.go b/multicluster/cmd/check.go
index b2b4158bf..f3083f436 100644
--- a/multicluster/cmd/check.go
+++ b/multicluster/cmd/check.go
@@ -627,6 +627,7 @@ func (hc *healthChecker) checkIfMirrorServicesHaveEndpoints(ctx context.Context)
 	for _, svc := range mirrorServices.Items {
 		// Check if there is a relevant end-point
 		endpoint, err := hc.KubeAPIClient().CoreV1().Endpoints(svc.Namespace).Get(ctx, svc.Name, metav1.GetOptions{})
+		time.Sleep(1 * time.Second)
 		if err != nil || len(endpoint.Subsets) == 0 {
 			servicesWithNoEndpoints = append(servicesWithNoEndpoints, fmt.Sprintf("%s.%s mirrored from cluster [%s]", svc.Name, svc.Namespace, svc.Labels[k8s.RemoteClusterNameLabel]))
 		}
```

Then run the `multicluster` integration tests to set up a multicluster scenario, and create lots of mirrored services:

```bash
$ bin/docker-build # accommodate to your own arch
$ bin/tests --name multicluster --skip-cluster-delete $PWD/target/cli/linux-amd64/linkerd

# we are currently in the target cluster context
$ k create ns testing

# create pod
$ k -n testing run nginx --image=nginx --restart=Never

# create 50 services pointing to it, flagged to be mirrored
$ for i in {1..50}; do k -n testing expose po nginx --port 80 --name "nginx-$i" -l mirror.linkerd.io/exported=true; done

# switch to the source cluster
$ k config use-context k3d-source

# this will trigger the creation of the mirrored services; wait till the
# 50 are created
$ k create ns testing

$ bin/go-run cli mc check --verbose
github.com/linkerd/linkerd2/multicluster/cmd
github.com/linkerd/linkerd2/cli/cmd
linkerd-multicluster
--------------------
√ Link CRD exists
√ Link resources are valid
    * target
√ remote cluster access credentials are valid
    * target
√ clusters share trust anchors
    * target
√ service mirror controller has required permissions
    * target
√ service mirror controllers are running
    * target
DEBU[0000] Starting port forward to https://0.0.0.0:34201/api/v1/namespaces/linkerd-multicluster/pods/linkerd-service-mirror-target-7c4496869f-6xsp4/portforward?timeout=30s 39327:9999
DEBU[0000] Port forward initialised
√ probe services able to communicate with all gateway mirrors
    * target
DEBU[0031] error retrieving Endpoints: client rate limiter Wait returned an error: context deadline exceeded
DEBU[0032] error retrieving Endpoints: client rate limiter Wait returned an error: context deadline exceeded
DEBU[0033] error retrieving Endpoints: client rate limiter Wait returned an error: context deadline exceeded
DEBU[0034] error retrieving Endpoints: client rate limiter Wait returned an error: context deadline exceeded
DEBU[0035] error retrieving Endpoints: client rate limiter Wait returned an error: context deadline exceeded
DEBU[0036] error retrieving Endpoints: client rate limiter Wait returned an error: context deadline exceeded
DEBU[0037] error retrieving Endpoints: client rate limiter Wait returned an error: context deadline exceeded
```
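A minimal sketch of the per-call context pattern the fix describes. The helper name and the 30-second budget are assumptions for illustration, not the exact code from `multicluster/cmd/check.go`:

```go
package check

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// getEndpoints gives every Endpoints lookup its own deadline. Deriving from
// context.Background() rather than a context shared across the whole loop
// means earlier slow calls can no longer exhaust this call's deadline.
func getEndpoints(client kubernetes.Interface, ns, name string) (*corev1.Endpoints, error) {
	// Fresh context per call; the 30s budget is an assumption.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	return client.CoreV1().Endpoints(ns).Get(ctx, name, metav1.GetOptions{})
}
```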
It's useful to modify values for `CheckURL` and `DefaultDockerRegistry` to test different configurations of Linkerd. To modify these at build time, they must be public vars.

Make `CheckURL` and `DefaultDockerRegistry` public vars, to allow for build-time modification using Go's `-ldflags`. Also, DRY up `DefaultDockerRegistry`.

Signed-off-by: Andrew Seigner <[email protected]>
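For context, this is how `-ldflags -X` overrides work. The package path and default values below are assumptions for illustration:

```go
// Exported package-level string vars can be overridden at link time, e.g.:
//
//	go build -ldflags "-X github.com/linkerd/linkerd2/pkg/version.CheckURL=https://example.com/version.json"
//
// Unexported vars and constants cannot be set this way, which is why the
// commit makes these vars public.
package version

var (
	CheckURL              = "https://versioncheck.linkerd.io/version.json" // assumed default
	DefaultDockerRegistry = "cr.l5d.io/linkerd"                            // assumed default
)
```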
The `linkerd check --output` flag supports 3 formats: `table`, `json`, and `short`. The default `linkerd check` command description incorrectly printed `basic, json, short`. Other extension check commands printed `basic, json`.

Modify all check output descriptions to print `table, json, short`.

Signed-off-by: Andrew Seigner <[email protected]>
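As a sketch, the corrected help text on a cobra flag might look like the following; the command constructor is hypothetical and only the `table, json, short` wording comes from the commit:

```go
package cmd

import "github.com/spf13/cobra"

// newCheckCmd sketches where the corrected description would live; the real
// check commands have many more flags and options.
func newCheckCmd() *cobra.Command {
	var output string
	cmd := &cobra.Command{Use: "check"}
	// List all three supported formats, instead of "basic, json, short".
	cmd.Flags().StringVarP(&output, "output", "o", "table",
		"Output format. One of: table, json, short")
	return cmd
}
```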
The `--log-level` flag did not support a `trace` level, despite the underlying `logrus` library supporting it. Also, at `debug` level, the Control Plane components were setting klog at `v=12`, which includes sensitive data.

Introduce a `trace` log level. Keep klog at `v=12` for `trace`; change it to `v=6` for `debug`.

Fixes #11132

Signed-off-by: Andrew Seigner <[email protected]>
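A hedged sketch of the described mapping, wiring `logrus` and `klog/v2` directly. The function name and surrounding wiring are assumptions; only the level-to-verbosity mapping comes from the commit message:

```go
package flags

import (
	"flag"

	log "github.com/sirupsen/logrus"
	"k8s.io/klog/v2"
)

// setLogLevel parses the --log-level value (logrus already understands
// "trace") and keeps klog at its most verbose v=12 only for trace, dropping
// to a tamer v=6 for debug.
func setLogLevel(level string) error {
	parsed, err := log.ParseLevel(level)
	if err != nil {
		return err
	}
	log.SetLevel(parsed)

	klogV := "0"
	switch parsed {
	case log.TraceLevel:
		klogV = "12" // most verbose; may include sensitive data
	case log.DebugLevel:
		klogV = "6" // verbose, but without the sensitive v=12 output
	}
	klogFlags := flag.NewFlagSet("klog", flag.ContinueOnError)
	klog.InitFlags(klogFlags)
	return klogFlags.Set("v", klogV)
}
```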
Fixes #11163

The `servicePublisher.updateServer` function will iterate through all registered listeners and update them. However, a nil listener may temporarily be in the list of listeners if an unsubscribe is in progress. This results in a nil pointer dereference.

All functions which result in updating the listeners must therefore be protected by the mutex, so that we don't try to act on the list of listeners while it is being modified.

Signed-off-by: Alex Leong <[email protected]>
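A minimal Go sketch of the invariant the fix enforces. The type and method names are simplified stand-ins for the Destination controller's actual `servicePublisher`/listener machinery:

```go
package destination

import "sync"

// portPublisher illustrates the rule: every path that reads or mutates the
// listeners slice must hold the mutex, so an in-progress unsubscribe can
// never expose a nil or half-removed entry to updateServer.
type portPublisher struct {
	mu        sync.Mutex
	listeners []func(update string)
}

func (p *portPublisher) subscribe(l func(update string)) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.listeners = append(p.listeners, l)
}

func (p *portPublisher) unsubscribe(i int) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.listeners = append(p.listeners[:i], p.listeners[i+1:]...)
}

func (p *portPublisher) updateServer(update string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	for _, l := range p.listeners {
		l(update) // safe: the slice cannot change under us
	}
}
```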
Problem: The `jaeger install` and `multicluster link` commands give precedence to the `LINKERD_DOCKER_REGISTRY` env var, whereas the `install`, `upgrade` and `inject` commands give precedence to the `--registry` flag.

Solution: Make the commands consistent by giving precedence to the `--registry` flag in all commands.

Fixes: #11115

Signed-off-by: Harsh Soni <[email protected]>
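A small sketch of the standardized precedence; the helper name and default value are illustrative, and `flagChanged` stands in for cobra's `cmd.Flags().Changed("registry")`:

```go
package cmd

import "os"

// registry resolves which image registry to use: an explicitly set
// --registry flag wins, and the LINKERD_DOCKER_REGISTRY environment
// variable is only a fallback.
func registry(flagValue string, flagChanged bool) string {
	if flagChanged {
		return flagValue
	}
	if env := os.Getenv("LINKERD_DOCKER_REGISTRY"); env != "" {
		return env
	}
	return "cr.l5d.io/linkerd" // assumed default
}
```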
Problem - The Linkerd CNI Helm chart templates currently have `hostNetwork: true` set, which is unnecessary and less secure.

Solution - Removed `hostNetwork: true` from the linkerd-cni Helm chart templates.

Fixes #11141

---------

Signed-off-by: Abhijeet Gaurav <[email protected]>
Co-authored-by: Alejandro Pedraza <[email protected]>
hawkw force-pushed the `eliza/stable-2.13.6` branch from 5d67ede to 7387b54 on August 9, 2023 20:32
In 2.13, the default inbound and outbound HTTP request queue capacity decreased from 10,000 requests to 100 requests (in PR #2078). This change results in proxies shedding load much more aggressively while under high load to a single destination service, resulting in increased error rates in comparison to 2.12 (see #11055 for details).

This commit changes the default HTTP request queue capacities for the inbound and outbound proxies back to 10,000 requests, the way they were in 2.12 and earlier. In manual load testing I've verified that increasing the queue capacity results in a substantial decrease in 503 Service Unavailable errors emitted by the proxy: with a queue capacity of 100 requests, the load test described [here] observed a failure rate of 51.51% of requests, while with a queue capacity of 10,000 requests, the same load test observes no failures.

Note that I did not modify the TCP connection queue capacities, or the control plane request queue capacity. These were previously configured by the same variable before #2078, but were split out into separate vars in that change. I don't think the queue capacity limits for TCP connection establishment or for control plane requests are currently resulting in instability the way the decreased request queue capacity is, so I decided to make a more focused change to just the HTTP request queues for the proxies.

[here]: #11055 (comment)

---

* Increase HTTP request queue capacity (linkerd/linkerd2-proxy#2449)
This release stops using the "interface" mode, and instead waits until another CNI plugin drops a proper network config, then appends the linkerd CNI config to it. This avoids having pods start before proper networking is established on the node.
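A rough Go sketch of what "chained" mode means in practice: wait for another plugin's network config to exist, then append a linkerd entry to its `plugins` array. The real installer's implementation details differ, and the field values here are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// appendLinkerdPlugin appends the linkerd-cni plugin to an existing
// .conflist written by another CNI plugin, rather than installing a
// standalone "interface" config of its own.
func appendLinkerdPlugin(conflistPath string) error {
	raw, err := os.ReadFile(conflistPath)
	if err != nil {
		return err // no config yet: callers wait and retry instead of creating one
	}
	var conf map[string]interface{}
	if err := json.Unmarshal(raw, &conf); err != nil {
		return err
	}
	plugins, ok := conf["plugins"].([]interface{})
	if !ok {
		return fmt.Errorf("%s has no plugins list", conflistPath)
	}
	conf["plugins"] = append(plugins, map[string]interface{}{
		"name": "linkerd-cni",
		"type": "linkerd-cni",
	})
	out, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(conflistPath, out, 0o644)
}
```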
This stable release fixes a regression introduced in stable-2.13.0 which resulted in proxies shedding load too aggressively while under moderate request load to a single service ([#11055]). In addition, it updates the base image for the `linkerd-cni` initcontainer to resolve a CVE in `libdb` ([#11196]), fixes a race condition in the Destination controller that could cause it to crash ([#11163]), as well as fixing a number of other issues.

* Control Plane
  * Fixed a race condition in the destination controller that could cause it
    to panic ([#11169]; fixes [#11163])
  * Improved the granularity of logging levels in the control plane ([#11147])

* Proxy
  * Changed the default HTTP request queue capacities for the inbound and
    outbound proxies back to 10,000 requests ([#11198]; fixes [#11055])

* CLI
  * Updated extension CLI commands to prefer the `--registry` flag over the
    `LINKERD_DOCKER_REGISTRY` environment variable, making the precedence more
    consistent (thanks @harsh020!) (see [#11144])

* CNI
  * Updated `linkerd-cni` base image to resolve [CVE-2019-8457] in `libdb`
    ([#11196])
  * Changed the CNI plugin installer to always run in 'chained' mode; the
    plugin will now wait until another CNI plugin is installed before
    appending its configuration ([#10849])
  * Removed `hostNetwork: true` from linkerd-cni Helm chart templates
    ([#11158]; fixes [#11141]) (thanks @abhijeetgauravm!)

* Multicluster
  * Fixed the `linkerd multicluster check` command failing in the presence of
    lots of mirrored services ([#10764])

[#10764]: #10764
[#10849]: #10849
[#11055]: #11055
[#11141]: #11141
[#11144]: #11144
[#11147]: #11147
[#11158]: #11158
[#11163]: #11163
[#11169]: #11169
[#11196]: #11196
[#11198]: #11198
[CVE-2019-8457]: https://avd.aquasec.com/nvd/2019/cve-2019-8457/
hawkw force-pushed the `eliza/stable-2.13.6` branch from 7387b54 to 7b54511 on August 9, 2023 21:03
alpeb approved these changes on Aug 9, 2023
LGTM 👍
adleong approved these changes on Aug 9, 2023
stable-2.13.6

This stable release fixes a regression introduced in stable-2.13.0 which resulted in proxies shedding load too aggressively while under moderate request load to a single service (#11055). In addition, it updates the base image for the `linkerd-cni` initcontainer to resolve a CVE in `libdb` (#11196), fixes a race condition in the Destination controller that could cause it to crash (#11163), as well as fixing a number of other issues.

* Control Plane
  * Fixed a race condition in the destination controller that could cause it
    to panic (#11169; fixes #11163)
  * Improved the granularity of logging levels in the control plane (#11147)
  * Replaced the `server_port_subscribers` gauge in the Destination
    controller's metrics with `server_port_subscribes` and
    `server_port_unsubscribes` counters (#11206; fixes #10764)

* Proxy
  * Changed the default HTTP request queue capacities for the inbound and
    outbound proxies back to 10,000 requests (#11198; fixes #11055)

* CLI
  * Updated extension CLI commands to prefer the `--registry` flag over the
    `LINKERD_DOCKER_REGISTRY` environment variable, making the precedence more
    consistent (thanks @harsh020!) (see #11144)

* CNI
  * Updated `linkerd-cni` base image to resolve CVE-2019-8457 in `libdb`
    (#11196)
  * Changed the CNI plugin installer to always run in 'chained' mode; the
    plugin will now wait until another CNI plugin is installed before
    appending its configuration (#10849)
  * Removed `hostNetwork: true` from linkerd-cni Helm chart templates
    (#11158; fixes #11141) (thanks @abhijeetgauravm!)

* Multicluster
  * Fixed the `linkerd multicluster check` command failing in the presence of
    lots of mirrored services (#10764)